
    Explanation in Science

    Scientific explanation is an important goal of scientific practice. Philosophers have proposed a striking diversity of seemingly incompatible accounts of explanation, from deductive-nomological to statistical-relevance, unification, pragmatic, causal-mechanical, mechanistic, causal-intervention, asymptotic, and model-based accounts. In this dissertation I apply two novel methods to reexamine our evidence about scientific explanation in practice and thereby address the fragmentation of philosophical accounts. I start by collecting a data set of 781 articles from one year of the journal Science. Using automated text-mining techniques, I measure the frequency and distribution of several groups of philosophically interesting words, such as "explain", "cause", "evidence", "theory", "law", "mechanism", and "model". I show that "explain" words are much more common in scientific writing than in other genres, occurring in roughly half of all articles, and that their use is very often qualified or negated. These results about the use of words complement traditional conceptual analysis. Next I use random samples from the data set to develop a large number of small case studies across a wide range of scientific disciplines. I use a sample of "explain" sentences to develop and defend a new general philosophical account of scientific explanation, and then test my account against a larger set of randomly sampled sentences and abstracts. Five coarse categories classify the explanans and explananda of my cases: data, entities, kinds, models, and theories. The pairing of explanans and explanandum categories indicates the form of an explanation. The explain-relation supports counterfactual reasoning about the dependence of qualities of the explanandum on qualities of the explanans, but for each form a different core relation between explanans and explanandum supports the explain-relation: causation, modelling, and argument are the core relations for different forms of scientific explanation between different categories of explanans and explananda. This flexibility allows me to resolve some of the fragmentation in the philosophical literature. I provide empirical evidence to show that my general philosophical account successfully describes a wide range of scientific practice across a large number of scientific disciplines.
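    The word-frequency measurement described in this abstract can be sketched roughly as follows. The word groups, regular expressions, and sample texts below are illustrative assumptions, not the dissertation's actual coding scheme.

```python
# Sketch: measure what fraction of articles contain at least one word
# from each philosophically interesting word group.
import re

# Illustrative word groups; the dissertation's actual lists may differ.
WORD_GROUPS = {
    "explain": r"\bexplain(s|ed|ing|ation|atory)?\b",
    "cause": r"\bcaus(e|es|ed|ing|al|ation)\b",
    "model": r"\bmodel(s|ed|ing|ling)?\b",
}

def group_document_frequency(articles):
    """Fraction of articles containing each word group at least once."""
    counts = {group: 0 for group in WORD_GROUPS}
    for text in articles:
        lowered = text.lower()
        for group, pattern in WORD_GROUPS.items():
            if re.search(pattern, lowered):
                counts[group] += 1
    n = len(articles) or 1
    return {group: count / n for group, count in counts.items()}

articles = [
    "These data are explained by a causal model.",
    "We measured the spectrum; no model was used.",
]
print(group_document_frequency(articles))
```

    A real pipeline would also need the qualification/negation detection the abstract mentions (e.g., checking for "may explain" or "does not explain" around each match).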

    Developing the Quantitative Histopathology Image Ontology : A case study using the hot spot detection problem

    Interoperability across data sets is a key challenge for quantitative histopathological imaging. There is a need for an ontology that can support effective merging of pathological image data with associated clinical and demographic data. To foster organized, cross-disciplinary, information-driven collaborations in the pathological imaging field, we propose to develop an ontology to represent imaging data and methods used in pathological imaging and analysis, which we call the Quantitative Histopathological Imaging Ontology (QHIO). We apply QHIO to breast cancer hot-spot detection with the goal of enhancing the reliability of detection by promoting the sharing of data between image analysts.

    Open Biomedical Ontologies Applied to Prostate Cancer

    In this presentation we survey preliminary results from the Interdisciplinary Prostate Ontology Project (IPOP), in which ontologies from the Open Biomedical Ontologies (OBO) library have been used to annotate clinical reports about prostate cancer. First we discuss why we rejected several controlled vocabularies, including SNOMED, DICOM, and RadLex, preferring instead to use the OBO library. We then briefly describe the database-backed website we have created around the relevant OBO ontologies, and provide excerpts of reports from radiology, surgery, and pathology which we have hyperlinked to the ontology terms. This method allows us to discover which relevant terms exist in the OBO library, and which do not. The final section of this paper discusses these gaps in the OBO library and considers methods of filling them.
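    The hyperlinking step described above can be illustrated with a minimal sketch. The term labels, ontology IDs, and matching strategy here are hypothetical stand-ins, not the IPOP implementation.

```python
# Sketch: link ontology term labels found in a report to OBO term pages.
import re

# Hypothetical label -> OBO ID mapping; IPOP uses curated OBO ontologies.
TERMS = {
    "prostate": "UBERON:0002367",
    "adenocarcinoma": "MONDO:0004970",
}

def annotate(report: str) -> str:
    """Wrap each known term label in an HTML link to its ontology term."""
    def link(match):
        label = match.group(0)
        term_id = TERMS[label.lower()]
        iri = "http://purl.obolibrary.org/obo/" + term_id.replace(":", "_")
        return f'<a href="{iri}">{label}</a>'
    pattern = r"\b(" + "|".join(map(re.escape, TERMS)) + r")\b"
    return re.sub(pattern, link, report, flags=re.IGNORECASE)

print(annotate("Biopsy of the prostate shows adenocarcinoma."))
```

    A naive substring match like this also shows why gap analysis matters: a label missing from the mapping simply goes unlinked, surfacing terms absent from the OBO library.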

    EDN-LD: A simple linked data tool

    EDN-LD is a set of conventions for representing linked data using Extensible Data Notation (EDN), together with a library for conveniently working with those representations in the Clojure programming language. It provides a lightweight alternative to existing linked-data tools for many common use cases, much in the spirit of JSON-LD. We present the motivation and design of EDN-LD, and demonstrate how it can clearly and concisely transform tables into triples.
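    EDN-LD itself is a Clojure library, but the table-to-triples idea it demonstrates can be sketched in Python. The prefix map and column names below are illustrative assumptions, not EDN-LD's API.

```python
# Sketch: expand compact IRIs with a prefix map, then emit one triple per
# non-subject cell of each table row.
PREFIXES = {
    "ex": "http://example.com/",
    "rdfs": "http://www.w3.org/2000/01/rdf-schema#",
}

def expand(curie: str) -> str:
    """Expand a compact IRI like 'ex:alice' using the prefix map."""
    prefix, local = curie.split(":", 1)
    return PREFIXES[prefix] + local

def table_to_triples(rows, subject_col):
    """Yield (subject, predicate, object) triples from rows given as dicts."""
    for row in rows:
        subject = expand(row[subject_col])
        for col, value in row.items():
            if col != subject_col:
                yield (subject, expand(col), value)

rows = [{"ex:id": "ex:alice", "rdfs:label": "Alice"}]
print(list(table_to_triples(rows, "ex:id")))
```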

    The eXtensible ontology development (XOD) principles and tool implementation to support ontology interoperability

    Ontologies are critical to data/metadata and knowledge standardization, sharing, and analysis. With hundreds of biological and biomedical ontologies developed, it has become critical to ensure ontology interoperability and the usage of interoperable ontologies for standardized data representation and integration. The suite of web-based Ontoanimal tools (e.g., Ontofox, Ontorat, and Ontobee) supports different aspects of extensible ontology development. By summarizing the common features of Ontoanimal and other similar tools, we identified and proposed an "eXtensible Ontology Development" (XOD) strategy and its associated principles. These principles call for reusing existing terms and semantic relations from reliable ontologies, developing and applying well-established ontology design patterns (ODPs), and involving community efforts to support new ontology development, thereby promoting standardized and interoperable data and knowledge representation and integration. The adoption of the XOD strategy, together with robust XOD tool development, will greatly support ontology interoperability and robust ontology applications that allow data to be Findable, Accessible, Interoperable and Reusable (i.e., FAIR).

    A Simple Standard for Sharing Ontological Mappings (SSSOM).

    Despite progress in the development of standards for describing and exchanging scientific information, the lack of easy-to-use standards for mapping between different representations of the same or similar objects in different databases poses a major impediment to data integration and interoperability. Mappings often lack the metadata needed to be correctly interpreted and applied. For example, are two terms equivalent or merely related? Are they narrow or broad matches? Or are they associated in some other way? Such relationships between the mapped terms are often not documented, which leads to incorrect assumptions and makes them hard to use in scenarios that require a high degree of precision (such as diagnostics or risk prediction). Furthermore, the lack of descriptions of how mappings were done makes it hard to combine and reconcile mappings, particularly curated and automated ones. We have developed the Simple Standard for Sharing Ontological Mappings (SSSOM), which addresses these problems by: (i) introducing a machine-readable and extensible vocabulary to describe metadata that makes imprecision, inaccuracy and incompleteness in mappings explicit; (ii) defining an easy-to-use, simple table-based format that can be integrated into existing data science pipelines without the need to parse or query ontologies, and that integrates seamlessly with Linked Data principles; (iii) implementing open and community-driven collaborative workflows that are designed to evolve the standard continuously to address changing requirements and mapping practices; and (iv) providing reference tools and software libraries for working with the standard. In this paper, we present the SSSOM standard, describe several use cases in detail and survey some of the existing work on standardizing the exchange of mappings, with the goal of making mappings Findable, Accessible, Interoperable and Reusable (FAIR). The SSSOM specification can be found at http://w3id.org/sssom/spec.
Database URL: http://w3id.org/sssom/spec
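    The "simple table-based format" mentioned above can be consumed with nothing more than the standard library, as in this minimal sketch. The sample mappings are invented for illustration, and only a few of the specification's columns are shown.

```python
# Sketch: read an SSSOM-style TSV mapping table and filter by predicate.
import csv
import io

# Invented sample rows using SSSOM core columns.
SSSOM_TSV = """subject_id\tpredicate_id\tobject_id\tmapping_justification
HP:0000118\tskos:exactMatch\tMP:0000001\tsemapv:ManualMappingCuration
HP:0001250\tskos:broadMatch\tMP:0002064\tsemapv:LexicalMatching
"""

def read_mappings(tsv_text):
    """Return mapping rows as dicts keyed by the SSSOM column names."""
    return list(csv.DictReader(io.StringIO(tsv_text), delimiter="\t"))

mappings = read_mappings(SSSOM_TSV)
exact = [m for m in mappings if m["predicate_id"] == "skos:exactMatch"]
print(len(mappings), len(exact))
```

    Keeping the predicate and justification explicit is exactly what lets a consumer distinguish an exact match curated by hand from a broad match produced by lexical tooling.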

    The reinforcing property of ethanol in the rhesus monkey

    Rhesus monkeys received intravenous injections of ethanol during daily sessions contingent on their presses on an available lever. Under the standard conditions, when each response on the lever during a 3-h period each day resulted in an i.v. injection of 0.1 g/kg ethanol, the monkeys made between 30 and 50 responses/session and developed blood ethanol levels of approximately 400 mg%. Under this and other conditions of response-contingent delivery of ethanol, a negatively accelerated pattern of self-injection within sessions was demonstrated. Variations in the dose per injection (0.05–0.2 g/kg/injection) resulted in changes in the rate of lever-pressing; the number of self-injections was inversely related to dose. Ethanol intake increased only slightly with increased dose per injection. Noncontingent administration of various doses of i.v. ethanol immediately prior to a daily session decreased the number of responses; the total amount of ethanol administered (contingent plus noncontingent), however, remained constant over a pretreatment dose range of 1 to 3 g/kg. When access time to ethanol was increased from 3 to 6 h/day, the total amount of ethanol taken increased slightly. However, the blood ethanol levels at the end of a 6-h session closely approximated those obtained following 3-h sessions, indicating that during the last 3–4 h of the 6-h sessions, the rate of ethanol intake closely matched the rate of ethanol elimination.
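    A back-of-the-envelope check of the reported dose-intake relationship: if the number of self-injections is inversely proportional to dose per injection, total session intake (injections times dose) stays constant. The injection counts below are illustrative, scaled from the reported 30–50 responses at 0.1 g/kg.

```python
# Sketch: total session intake under an inverse dose-response relationship.
def session_intake(n_injections, dose_g_per_kg):
    """Total ethanol self-administered in one session, in g/kg."""
    return n_injections * dose_g_per_kg

baseline = session_intake(40, 0.1)       # ~4 g/kg at the standard dose
halved_dose = session_intake(80, 0.05)   # twice the injections
doubled_dose = session_intake(20, 0.2)   # half the injections
print(baseline, halved_dose, doubled_dose)
```

    The abstract reports that intake in fact rose only slightly with dose, i.e., the inverse relationship was approximate rather than exact.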

    Constraints on the χ_(c1) versus χ_(c2) polarizations in proton-proton collisions at √s = 8 TeV

    The polarizations of promptly produced χ_(c1) and χ_(c2) mesons are studied using data collected by the CMS experiment at the LHC, in proton-proton collisions at √s = 8 TeV. The χ_c states are reconstructed via their radiative decays χ_c → J/ψγ, with the photons being measured through conversions to e⁺e⁻, which allows the two states to be well resolved. The polarizations are measured in the helicity frame, through the analysis of the χ_(c2) to χ_(c1) yield ratio as a function of the polar or azimuthal angle of the positive muon emitted in the J/ψ → μ⁺μ⁻ decay, in three bins of J/ψ transverse momentum. While no differences are seen between the two states in terms of azimuthal decay angle distributions, they are observed to have significantly different polar anisotropies. The measurement favors a scenario where at least one of the two states is strongly polarized along the helicity quantization axis, in agreement with nonrelativistic quantum chromodynamics predictions. This is the first measurement of significantly polarized quarkonia produced at high transverse momentum.